
    Chasing a Better Decision Margin for Discriminative Histopathological Breast Cancer Image Classification

    Distinguishing benign from malignant cancer in a large dataset of histopathological breast images captured at various magnification levels is time-intensive. Automating histopathological breast cancer image classification therefore holds significant promise for expediting pathology diagnoses and reducing analysis time. Convolutional neural networks (CNNs) have recently gained traction for their ability to classify histopathological breast cancer images more accurately. CNNs excel at extracting distinctive features that emphasize semantic information. However, traditional CNNs employing the softmax loss function often struggle to achieve the discriminatory power this task requires. To address this challenge, a family of angular margin-based softmax loss functions has emerged, including angular softmax (A-Softmax), large margin cosine loss (CosFace), and additive angular margin (ArcFace), all sharing a common objective: maximizing inter-class variation while minimizing intra-class variation. This study examines these three loss functions and their ability to extract distinguishing features while widening the decision boundary between classes. Rigorous experiments were conducted on BreakHis, a well-established histopathological breast cancer image dataset. The results show that CosFace focuses on increasing the differences between classes, while A-Softmax and ArcFace tend to emphasize reducing within-class variation. These observations underscore the efficacy of margin penalties on angular softmax losses in enhancing feature discrimination within the embedding space. These loss functions consistently outperform softmax-based techniques, either by widening the gaps among classes or by enhancing the compactness of individual classes. This work is partially supported by the project GUI19/027 and by the grant PID2021-126701OB-I00 funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”.
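    To make the margin mechanics concrete, the sketch below shows how CosFace and ArcFace modify the target-class logit in a PyTorch-style training step; the margin m, scale s, and function names are illustrative defaults, not the values tuned in this study. A-Softmax differs in that it multiplies the target angle, giving cos(m·θ), rather than adding to it.

        import torch
        import torch.nn.functional as F

        def margin_softmax_logits(features, class_weights, labels,
                                  m=0.35, s=30.0, mode="cosface"):
            # Cosine similarity between L2-normalized features and class weights.
            cos = F.linear(F.normalize(features), F.normalize(class_weights))
            idx = torch.arange(cos.size(0))
            if mode == "cosface":
                # CosFace: subtract an additive cosine margin from the target logit.
                target = cos[idx, labels] - m
            else:
                # ArcFace: add an additive angular margin to the target angle.
                theta = torch.acos(cos[idx, labels].clamp(-1 + 1e-7, 1 - 1e-7))
                target = torch.cos(theta + m)
            logits = cos.clone()
            logits[idx, labels] = target
            return s * logits  # fed to standard cross-entropy

    Either branch shrinks the target-class logit, so the network must pull same-class features closer together and push classes further apart to recover the same loss, which is exactly the discrimination effect the study measures.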

    Deep Learning with Discriminative Margin Loss for Cross-Domain Consumer-to-Shop Clothes Retrieval

    Consumer-to-shop clothes retrieval is the problem of matching photos taken by customers with their counterparts in the shop. Because of the large number of clothing categories and the variation in appearance caused by different camera angles, shooting conditions, background environments, and body postures, the retrieval accuracy of traditional consumer-to-shop models is low. With advances in convolutional neural networks (CNNs), the accuracy of garment retrieval has improved significantly. Most approaches to this problem use a single CNN together with a softmax loss function to extract discriminative features. In the fashion domain, negative pairs can have small or large visual differences, which makes it difficult to minimize intra-class variance and maximize inter-class variance with softmax. Margin-based softmax losses such as Additive Margin-Softmax (also known as CosFace) improve the discriminative power of the original softmax loss, but because they apply the same margin to positive and negative pairs, they are not suitable for cross-domain fashion search. In this work, we introduce the cross-domain discriminative margin loss (DML) to deal with the large variability of negative pairs in fashion. DML learns two different margins for positive and negative pairs such that the negative margin is larger than the positive margin, which enforces stronger separation of negative pairs. Experiments conducted on the publicly available fashion dataset DARN and on two benchmarks of the DeepFashion dataset, (1) Consumer-to-Shop Clothes Retrieval and (2) InShop Clothes Retrieval, confirm that the proposed loss function not only outperforms existing loss functions but also achieves the best performance.
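    The abstract does not give DML's exact formulation, so the following is only a hypothetical sketch of the dual-margin idea in PyTorch: a pairwise cosine loss in which the negative margin m_neg is deliberately larger than the positive margin m_pos. All names and values here are illustrative, not the paper's.

        import torch
        import torch.nn.functional as F

        def dual_margin_loss(anchor, candidate, is_positive,
                             m_pos=0.2, m_neg=0.5):
            # Hypothetical values with m_neg > m_pos, mirroring the asymmetry
            # between positive and negative pairs that the abstract describes.
            cos = F.cosine_similarity(anchor, candidate, dim=1)
            # Positive pairs are penalized while similarity stays below 1 - m_pos.
            pos_term = F.relu((1.0 - m_pos) - cos)
            # Negative pairs are penalized until similarity drops below 1 - m_neg,
            # so negatives are pushed apart more forcefully than positives
            # are pulled together.
            neg_term = F.relu(cos - (1.0 - m_neg))
            y = is_positive.float()
            return (y * pos_term + (1.0 - y) * neg_term).mean()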

    Optimizing Deep Learning-Based Plant Disease Classification with CBAM

    In recent years, deep learning-based plant disease classification has developed rapidly. However, collecting enough annotated image data to effectively train deep learning models for plant disease recognition is challenging. The attention mechanism in deep learning helps a model focus on informative segments of the data and extract discriminative features from the inputs, improving training performance. This paper investigates the Convolutional Block Attention Module (CBAM), a lightweight attention module that can be plugged into any CNN architecture with negligible overhead, as a way to improve classification with CNNs. Specifically, CBAM is applied to the output feature map of a CNN to highlight important local regions and extract more discriminative features. Well-known CNN models (EfficientNetB0, MobileNetV2, ResNet50, InceptionV3, and VGG19) were used for transfer learning on plant disease classification and then fine-tuned on DiaMOS Plant, a publicly available dataset of foliar diseases in pear trees containing 3006 images of leaves affected by different stress symptoms. Among the tested CNNs, EfficientNetB0 showed the best performance, and EfficientNetB0+CBAM outperformed plain EfficientNetB0, reaching 86.89% classification accuracy. The experimental results demonstrate the effectiveness of the attention mechanism in improving the recognition accuracy of pre-trained CNNs when training data are scarce.
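    As a concrete reference, here is a minimal PyTorch sketch of CBAM's two stages, channel attention followed by spatial attention; the reduction ratio and spatial kernel size are the commonly used defaults rather than values reported in the paper.

        import torch
        import torch.nn as nn

        class CBAM(nn.Module):
            def __init__(self, channels, reduction=16, kernel_size=7):
                super().__init__()
                # Shared MLP scores each channel from avg- and max-pooled descriptors.
                self.mlp = nn.Sequential(
                    nn.Linear(channels, channels // reduction),
                    nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels),
                )
                # One conv scores each spatial position from channel-pooled maps.
                self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

            def forward(self, x):  # x: (batch, channels, height, width)
                b, c, _, _ = x.shape
                chan = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) +
                                     self.mlp(x.amax(dim=(2, 3))))
                x = x * chan.view(b, c, 1, 1)  # channel attention
                pooled = torch.cat([x.mean(dim=1, keepdim=True),
                                    x.amax(dim=1, keepdim=True)], dim=1)
                return x * torch.sigmoid(self.conv(pooled))  # spatial attention

    One plausible way to mirror the paper's EfficientNetB0+CBAM setup is to apply this module to the backbone's output feature map before global pooling and the classifier head, matching the placement the abstract describes.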
